Paul Tagliamonte: Upcoming Talks
Hello, World!
I've got two upcoming talks - one at LibrePlanet 2014 about OpenPGP, and one at PyCon US 2014 (in Canada) on Hy. Slides will be posted as I give the talks.
Hope to see folks there!
docker.io
It's now in Debian, so go ahead and sudo apt-get install docker.io whenever you want :)
The first two uploads have a few errors, which are 100% my fault. The first was a set of FTBFS bugs, a stupid error on my part.
Thanks to olasd for catching the remaining bug, related to stripping the binaries.
I'm so sorry this happened; it slipped through because I had a local docker binary in /usr/local, which meant I thoroughly tested a completely different binary than the one I uploaded. I won't let it happen again. It should be fixed now.
However, this comes with a warning. It appears as though systemd (which, for the record, I adore) is allowing lxc-start to unmount /dev/pts and friends, which causes a bunch of damage to the host.
I've filed this as bug #734813.
Currently, the outdated mount(1) is holding back a workaround. Hopefully we get util-linux updated so we can hack around this in the short term, and hopefully the bigger issues get solved too.
So, if you're a systemd user, please hold off on using docker.io until we resolve this issue in Debian.
Changes from Hy 0.9.11

tl;dr: 0.9.12 comes with some massive changes. We finally took the time to implement gensym, as well as a few other bits that help macro writing. Check the changelog for what exactly was added.

The biggest feature, Reader Macros, landed later in the cycle, but was big enough to warrant a release on its own. A huge thanks goes to Foxboron for implementing them, and a massive hug goes out to olasd for providing ongoing reviews during the development.

Welcome to the new Hy contributors, Henrique Carvalho Alves, Kevin Zita and Kenan Bölükbaşı. Thanks for your work so far, folks!

Hope y'all enjoy the finest that 2013 has to offer,
- Hy Society

* Special thanks goes to Willyfrog, Foxboron and theanalyst for writing 0.9.12's NEWS. Thanks, y'all! (PT)

[ Language Changes ]
* Translate foo? -> is_foo, for better Python interop. (PT)
* Reader Macros!
* Operators + and * now work without arguments
* Define kwapply as a macro
* Added apply as a function
* Instant symbol generation with gensym
* Allow macros to return None
* Added a method for casting into byte string or unicode depending on Python version
* flatten function added to the language
* Added type coercion to the right integer for the platform

[ Misc. Fixes ]
* Added information about core team members
* Documentation fixed and extended
* Added astor to install_requires, to fix hy --spy failing on hy 0.9.11
* Convert stdout and stderr to UTF-8 properly in the run_cmd helper
* Updated requirements.txt and setup.py to use rply upstream
* tryhy link added to documentation and README
* Command line options documented
* Added support for coverage tests at coveralls.io
* Added info about tox, so people can use it prior to a PR
* Added the start of hacking rules
* Halting Problem removed from examples, as it was nonfree
* Fixed mirror handling: PyPI is now behind a CDN, and the --use-mirrors option is deprecated
* Badges for pypi version and downloads

[ Syntax Fixes ]
* get allows multiple arguments

[ Bug Fixes ]
* OSX: fixed a readline problem which caused the HyREPL to not allow 'b'
* Fixed REPL completions on OSX
* Made HyObject.replace more resilient, to prevent compiler breakage

[ Contrib changes ]
* Anaphoric macros added to contrib
* Modified eg/twisted to follow the newer hy syntax
* Added (experimental) profile module
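For a feel of what gensym buys macro writers, here is a rough Python sketch of the idea (my own illustration, not Hy's implementation; the `_;<prefix>_<n>` name format is invented): every call yields a fresh name, so a macro expansion can bind temporaries without capturing the user's variables.

```python
import itertools

# Monotonic counter shared by all gensym calls.
_counter = itertools.count(1)

def gensym(prefix="g"):
    """Return a fresh symbol name that user code is unlikely to shadow.

    NOTE: the "_;<prefix>_<n>" format here is invented for illustration;
    Hy's real gensym uses its own naming scheme.
    """
    return "_;{}_{}".format(prefix, next(_counter))
```

Two consecutive calls never collide, which is exactly the property a macro expansion needs to stay out of the way of user code.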
(rules "personal"
  (rule "pault.ag" pingable httpable)
  (rule "lucifer.pault.ag" pingable httpable))

Which expands out to quite a few lines that query the server's status using Snitch's informants, and insert the result into MongoDB. More to come!
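As a rough Python illustration of what such an expansion might produce (the record fields and the flattening here are my guesses at Snitch's behavior, not its actual code), each (host, informant) pair becomes one check to run and store:

```python
# Hypothetical sketch: flatten a Snitch-style rule tree into one check
# record per (host, informant) pair, ready to be run and inserted into
# a database. Field names are invented for illustration.

def expand_rules(tree):
    # tree = ("rules", group, rule, ...); rule = ("rule", host, informant, ...)
    _, group, *rules = tree
    return [
        {"group": group, "host": host, "informant": informant}
        for _, host, *informants in rules
        for informant in informants
    ]

tree = ("rules", "personal",
        ("rule", "pault.ag", "pingable", "httpable"),
        ("rule", "lucifer.pault.ag", "pingable", "httpable"))

checks = expand_rules(tree)
```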
I don't fully fit in with the Debian hardliners (which is to say, pip sucks and shouldn't exist), nor do I fit in with the Python hardliners (which is to say, apt and dpkg are out of date, and neither has a place on a development machine).
I think our discourse on this topic has become petty and stupid in general.
Let's all try to step back and drop a bit of the attitude.
pip doesn't suck, and neither does apt.
The truth is, both sides are wrong. As with any subject, the real
answer here is much more nuanced than either side presents it. I'm going to
try and present my opinion on this, in the way that both my Pythonista self
and my Debianite self see the issue. Hopefully I can keep this short, to
the point, and caked with logic.
The case for dpkg
(the Debianite in me)
In defense of dpkg and apt, imagine having to install python-gnome2 by hand on every system you set up. It'd be hell on earth.
Imagine having a user try to do this. It's insane to assume that end-users will be using pip for this purpose.
pip is fun and all, but it's also installing 100% untrusted code to your system (perhaps as root, if you're using pip with sudo for some reason), and that code hasn't been reviewed for software freeness, which is something Debian (and Debian users) take seriously. This isn't even to mention the havoc that pip wreaks on dpkg-controlled files and packages.
Try to remember how much of your system (yes, right now) is running because of Python or Python modules. Try to imagine how much of a pain in the ass it'd be if you couldn't boot into GNOME to use nm-applet to connect to wifi to pip install something. I'm sure even the most extreme pip'er understands the need for operating-system-level package management.
Debian also has a bigger problem scope - we're not maintaining a library
in Debian for kicks, we're maintaining it so that end user applications may
use the library. When we update something like Django, we have to make sure
that we don't break anything using it (although, to be honest, the fact that we
package webapps is an entire rant for later) before we get to update it to the
newest release.
Hell, with a few coffees, I could automate the process of releasing a .deb
with a new upstream release, 100% unattended. I won't, however, since this is
an insane idea. Let's go over a brief list of things I do before uploading a new package:

* … (pickle files, etc).
* … (piuparts, and adequate, etc).
* … stable release.

The same goes for commercial distros, such as Red Hat or Ubuntu, and for community distros, such as Fedora or Arch.
It's also not Debian's job to package the world in the archive. This is an
insane task, and it's not Debian's place to do it. We introduce libraries
as things need them, not because we wrote some new library that someone
may find slightly useful at some point in the future. Maybe.
Upstream developers and language communities (not only Python here) tend to
lose sight of why we're doing this in the first place, which
is our users. This isn't some sort of technical pissing contest to see who can
distribute the software in the best way. Debian-folk always keep end users
as our highest priority.
I quote the Debian Social Contract when I say that "our priorities are our users and free software". No one's trying to get developers to use dpkg to create software. In fact, as you'll see below, I actively discourage using system modules for development.
The case for pip
(the Pythonista in me)
In defense of pip, the idea that Debian will keep the latest versions of packages is insane. The idea that we can keep pace with upstream releases is nuts, and the idea that every upstream release on pypi is ready to ship is bananas. B-a-n-a-n-a-s.
As a developer, I don't want to support every release, and I surely don't want
other people depending on some random snapshot.
Often, I'll put stuff up on pypi as a preview, or to release often and solicit feedback, without having to give out instructions on using a git checkout. (It's also easier to have people try a version from pypi, so I can cross-reference the git tag to reproduce issues when they file them.)
pypi is easy, ubiquitous, and works regardless of the platform, which means less of my development time is spent packaging stuff up for platforms I don't really care about (Arch, Fedora, OSX, Windows), even though I value feedback from users on those systems. The effort it takes to release something is limited to python setup.py sdist upload, and it puts the code in a place (and in a shape) where anyone can use it without needing 10 sets of platform-local instructions.
Even ignoring all the above, when I'm writing a new app or bit of code,
I want to be sure I'm targeting the latest version of the code I depend on,
so that future changes to API won't hit me as hard. By following
along with my dependencies' development, I can be sure that my code breaks
early, and breaks in development, not production. Upstreams also tend not to like bug reports against old branches, so ensuring I have the latest code from pypi means I can file proper bugs.
Lastly, I prefer virtualenv-based setups for development, since I'm usually working on many things at once. This often means version mismatches in libraries, which brings in API changes (another whole rant here as well). I don't want to keep installing and uninstalling packages to switch between projects, and using a chroot(8) means a lot of overhead and a development environment disconnected from my filesystem, so I resort to virtualenv to isolate my development environment.
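That workflow can be sketched with the stdlib venv module, which today covers the same ground as the standalone virtualenv tool (a sketch assuming a POSIX-ish system; the project name is made up): each project gets its own interpreter, so library versions never clash.

```python
import os
import subprocess
import tempfile
import venv

def isolated_python(env_dir):
    """Create a throwaway environment and return its interpreter path."""
    venv.create(env_dir, with_pip=False)  # with_pip=False keeps it quick
    bindir = "Scripts" if os.name == "nt" else "bin"
    return os.path.join(env_dir, bindir, "python")

# One environment per project means per-project library versions.
with tempfile.TemporaryDirectory() as tmp:
    py = isolated_python(os.path.join(tmp, "project-a"))
    # The isolated interpreter reports its own prefix, not the system one.
    env_prefix = subprocess.check_output(
        [py, "-c", "import sys; print(sys.prefix)"], text=True
    ).strip()
```

Switching projects is then just a matter of which interpreter you invoke, with no install/uninstall churn in between.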
Final notes
I don't want to keep arguing about this. Just accept that the world's a big place, and that there exist use-cases for which both apt and pip need to exist and work the way they do now. At the very least, try to understand that there are smart people on both sides, and that no one is trying to screw anyone over or keep their own little private club to themselves. Hopefully, going forward, we can make sure that the integration between these two tools gets better, not worse.
Help make this dream a reality. Contribute to a productive tone, not a
destructive one. In short:
* Use pip without sudo, always. Don't tell people to use sudo.
* Use apt or dpkg when deploying system-wide.
* Use pip and virtualenv in development setups, so we can upgrade your app when we upgrade the lib.
$ ssh master.d.o
ssh: Could not resolve hostname master.d.o: Name or service not known
Sadsies. Now, let's make it work:
$ ssh master.d.o whoami
ssh: Could not resolve hostname master.d.o: Name or service not known
$ sudo apt-get install -t experimental olla
# (restart your terminal)
$ ssh master.d.o whoami
paultag
Thank you, thank you! I ll be here all week!
Anyway. Have fun, kids! Remember, hacks are fun!
DTRT with it (e.g. dashes turn to underscores, so true-division becomes true_division; earmuffs turn to all-caps).
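The translation just described can be sketched in Python roughly like this (my own approximation of the stated rules, not Hy's actual mangling code):

```python
def mangle(name):
    """Approximate the identifier translation described above:
    earmuffed names (*foo*) become ALL_CAPS, dashes become underscores.
    This is an illustrative sketch, not Hy's real mangler.
    """
    if len(name) > 2 and name.startswith("*") and name.endswith("*"):
        return name[1:-1].replace("-", "_").upper()
    return name.replace("-", "_")
```

So true-division mangles to true_division, and *foo-bar* to FOO_BAR.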
;; good:
(with [fd (open "/etc/passwd")]
(print (.readlines fd)))
;; bad:
(with [fd (open "/etc/passwd")]
(print (fd.readlines)))
Use the threading macro throughout code where it makes sense.
;; good:
(import [sh [cat grep]])
(-> (cat "/usr/share/dict/words") (grep "-E" "tag$"))
;; bad:
(import [sh [cat grep]])
(grep (cat "/usr/share/dict/words") "-E" "tag$")
;; good (and preferred):
(defn fib [n]
  (if (<= n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))
;; still OK:
(defn fib [n]
(if (<= n 2) n (+ (fib (- n 1)) (fib (- n 2)))))
;; still OK:
(defn fib [n]
  (if (<= n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))
;; Stupid as hell
(defn fib [n]
        (if (<= n 2)
    n
          (+ (fib (- n 1)) (fib (- n 2)))))
;; good (and preferred):
(defn fib [n]
  (if (<= n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))))
;; Stupid as hell
(defn fib [n]
  (if (<= n 2)
      n
      (+ (fib (- n 1)) (fib (- n 2)))
  )
) ; GAH, BURN IT WITH FIRE
;; bad (and evil)
(defn foo (x) (print x))
(foo 1)
;; good (and preferred):
(defn foo [x] (print x))
(foo 1)
mentors.debian.net (actually sending the first messages from Debian's infrastructure)

mentors.debian.net was chosen because I'm an admin and could do the integration quickly. That involved backporting the eleven aforementioned packages, plus zeromq3 and python-zmq (which only have TCP_KEEPALIVE in recent versions), to wheezy, as that's what the mentors.d.n host is running. (Also, python-zmq needs a new-ish cython to build, so I had to backport that too.) Thankfully, those were no-changes backports that were easily scripted, using a pbuilder hook to allow the packages to depend on previously built packages.
I have made a wheezy package repository available here. It's signed with my GnuPG key, ID 0xB8E5087766475AAF, which should be fairly well connected.
Code changes
After Simon's initial setup of debexpo (which is not an easy task), the code changes have been fairly simple (yes, this is just a proof of concept). You can see them on top of the live branch in debexpo's sources. I finally had the time to make them live earlier this week, and mentors.debian.net has been sending messages on Debian's fedmsg bus ever since.
Deployment
mentors.d.n sends its messages on five endpoints, tcp://mentors.debian.net:3000 through tcp://mentors.debian.net:3004. That is one endpoint per WSGI worker, plus one for the importer process(es). You can tap in directly by following the instructions below.
debmessenger
Debmessenger is the stop-gap email-to-fedmsg bridge that Simon is developing. The goal is to create some activity on the bus without disrupting or modifying any infrastructure service. It's written in Hy, and it leverages the existing Debian-related Python modules to do its work, using inotify to react when a mail gets dropped in a Maildir.
Right now, it's supposed to understand changes mails (received from debian-devel-changes) and bugs mails (from debian-bugs-dist).
I'll work on deploying an instance of debmessenger this weekend, to create some more traffic on the bus.
Reliability of the bus
I suggested using fedmsg as this was something that already existed, and that solved a problem identical to the one we wanted to tackle (open interconnection of a distribution's infrastructure services). Reusing a piece of infrastructure that already works in another distro means that we can share tools, share ideas, and come up with solutions that we might not have considered when working alone. The drawback is that we have to either adapt to the tool's idiosyncrasies, or adapt the tool to our way of working.
One of the main points raised by DSA when the idea of using fedmsg was brought up was that of reliability. Debian's infrastructure is spread across datacenters (and basements) all over the world, and thus faces different challenges than Fedora's infrastructure, which is more tightly integrated. Therefore, we have to ensure that a critical consumer (say, a buildd) doesn't miss any message it would need for its operation (say, that a package got accepted).
There has been work upstream to ensure that fedmsg doesn't lose messages, but we need to take extra steps to make sure that a given consumer can replay the messages it has missed, should the need arise. Simon has started a discussion on the upstream mailing list, and is working on a prototype replay mechanism. Obviously, we need to test scenarios of endpoints dropping off the grid, hence the work on getting some activity on the bus.
How can I take a look?
a.k.a. Another one rides the bus
(Picture Yves-Laurent Allaert, CC-By-SA v2.5 / GFDL v1.2 license)
So, the bus is pretty quiet right now, as only two kinds of events trigger messages: a new upload to mentors.debian.net, and a new comment on a package there. Don't expect a lot of traffic. However, generating some traffic is easy enough: just log in to mentors.d.n, pick a package of mine (not much choice there), or a real package you want to review, and leave a comment. Poof, a message appears.
For the lazy
Join #debian-fedmsg on OFTC, and look for messages from the debmsg bot.
Current example output:
01:30:25 <debmsg> debexpo.voms-api-java.upload (unsigned) --
02:03:16 <debmsg> debexpo.ocamlbricks.comment (unsigned) --
(definitely needs some work, but it's a start)
Listening in by yourself
You need to set up fedmsg. I have a repository of wheezy packages and one of sid packages, signed with my GnuPG key, ID 0xB8E5087766475AAF. You can add them to a file in /etc/apt/sources.list.d like this:
deb http://perso.crans.org/dandrimont/fedmsg-<sid wheezy>/ ./
Then, import my GnuPG key into apt (apt-key add), update your sources (apt-get update), and install fedmsg (apt-get install python-fedmsg). The package versions compare << (strictly lower) than anything real, so you should get the real thing as soon as it hits the archive.
Finally, in /etc/fedmsg.d/endpoints.py
, you can comment-out the Fedora entries, and add a Debian entry like this:
"debian": [
"tcp://fedmsg.olasd.eu:9940",
],
fedmsg.olasd.eu runs a fedmsg gateway connected to the mentors.d.n endpoints, and thus forwards all the mentors messages. It'll be connected to debmessenger as soon as that's running too.
To actually see messages, disable validate_signatures in /etc/fedmsg.d/ssl.py, setting it to False. The Debian messages aren't signed yet (it's on the roadmap), and we don't ship the Fedora certificates, so we can't authenticate their messages either.
Finally, you can run fedmsg-tail --really-pretty in a terminal. As soon as there's some activity, you should get this kind of output (color omitted):
{
  "i": 1,
  "msg": {
    "version": "2.0.9-1.1",
    "uploader": "Emmanuel Bourg <ebourg@apache.org>"
  },
  "topic": "org.debian.dev.debexpo.voms-api-java.upload",
  "username": "expo",
  "timestamp": 1373758221.491809
}
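Since the output is plain JSON, a consumer can pick a message apart with the stdlib alone. The sketch below parses a message shaped like the example above; the org.debian.&lt;env&gt;.&lt;service&gt;.&lt;package&gt;.&lt;event&gt; reading of the topic is my inference from this one sample, not a documented scheme.

```python
import json

# A message shaped like the fedmsg-tail output shown above.
raw = """
{
  "i": 1,
  "msg": {
    "version": "2.0.9-1.1",
    "uploader": "Emmanuel Bourg <ebourg@apache.org>"
  },
  "topic": "org.debian.dev.debexpo.voms-api-java.upload",
  "username": "expo",
  "timestamp": 1373758221.491809
}
"""

message = json.loads(raw)
# Split the dotted topic into its components (layout inferred from this sample).
org, distro, env, service, package, event = message["topic"].split(".", 5)
```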
Enjoy real-time updates from your favorite piece of infrastructure!
What's next?
While Simon continues working on reliability, and gets started on message signing according to his schedule, I'll take a look at deploying the debmessenger bridge, and at making the pretty-printer outputs useful for our topics. There will likely be some changes to the messages sent by debexpo, as we got some feedback from the upstream developers about making them work in the fedmsg tool ecosystem (datanommer and datagrepper come to mind).
You can tune in to Simon's weekly reports on the soc-coordination list, and look at the discussions with upstream on the Fedora messaging-sig list. You can also catch us on IRC, #debian-soc on OFTC. We're also hanging out on the upstream channel, #fedora-apps on freenode.
Bob Tolbert, Christopher Allan Webber, Duncan McGreggor, Guillermo Vaya, Joe H. Rahme, Julien Danjou, Konrad Hinsen, Morten Linderud, Nicolas Dandrimont, Ralph Moritz, rogererens, Thomas Ballinger, Tuukka Turto

Outstanding! New features are now being considered for 0.9.11. Thanks!